

Search results: all records where Creators/Authors contains "Tell, Roei"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Two fundamental problems on directed graphs are to decide s-t connectivity, and to estimate the behavior of random walks. Currently, there is no known algorithm for s-t connectivity running in polynomial time and n^{o(1)} space, and no known algorithm for estimating the n-step random walk matrix running in non-deterministic logspace. We show that for every directed graph, at least one of these problems is solvable in time and space that significantly improve on the respective state-of-the-art. In particular, there is a pair of algorithms A1 and A2 such that for every graph G, either:
     • A1(G) outputs the transitive closure of G in polynomial time and polylogarithmic space; or
     • A2(G) outputs an approximation of the n-step random walk matrix of G in non-deterministic logspace.
     As one application, we show surprisingly tight win-win results for space-bounded complexity. For example, for certain parameter regimes, either Savitch's theorem can be non-trivially sped up, or randomized space can be almost completely derandomized. We also apply our techniques to significantly weaken the assumptions required to derandomize space-bounded computation, and to make non-deterministic space-bounded computation unambiguous. Specifically, we deduce such conclusions from lower bounds against uniform circuits of polynomial size, which is an exponential improvement on the required hardness in previous works (Doron–Pyne–Tell STOC 2024, Li–Pyne–Tell FOCS 2024). We further show similar results for minimal-memory derandomization (Doron–Tell CCC 2024). To prove these results, we substantially improve the array of technical tools introduced in recent years for studying hardness-vs.-randomness for bounded-space computation. In particular, we develop derandomized distinguish-to-predict transformations for new types of distinguishers (corresponding to compositions of PRGs with weak distinguishers), we construct a derandomized logspace reconstruction procedure for the Shaltiel–Umans generator (JACM 2005) that can compress hard truth-tables to polylogarithmic size, and we design a version of the Chen–Tell generator (FOCS 2021) that is particularly suitable for the space-bounded setting.
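The two objects in the win-win pair are concrete and easy to state in code. Below is a minimal NumPy sketch of the transitive closure and the n-step random walk matrix of a directed graph; it is illustrative only and makes no attempt at the time/space bounds of the actual algorithms A1 and A2.

```python
import numpy as np

def transitive_closure(adj: np.ndarray) -> np.ndarray:
    """Boolean transitive closure via O(log n) squarings of the
    reachability matrix (with self-loops added)."""
    n = adj.shape[0]
    reach = adj.astype(bool) | np.eye(n, dtype=bool)
    for _ in range(max(1, n.bit_length())):
        reach = reach | ((reach.astype(int) @ reach.astype(int)) > 0)
    return reach

def random_walk_matrix(adj: np.ndarray, steps: int) -> np.ndarray:
    """The `steps`-step random walk matrix: the corresponding power of
    the transition matrix (uniform over out-edges; a row with no
    out-edges stays all-zero, i.e., the matrix is substochastic)."""
    deg = adj.sum(axis=1, keepdims=True)
    P = np.divide(adj, deg, out=np.zeros_like(adj, dtype=float), where=deg > 0)
    return np.linalg.matrix_power(P, steps)
```

Repeated squaring, used here for the closure, is also the engine behind Savitch-style space-bounded reachability algorithms.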
  2. Srinivasan, Srikanth (Ed.)
    The question of optimal derandomization, introduced by Doron et al. (JACM 2022), has garnered significant recent attention. Works in recent years showed conditional superfast derandomization algorithms, as well as conditional impossibility results and barriers to obtaining superfast derandomization using certain black-box techniques. Of particular interest is the extreme high end, which focuses on "free lunch" derandomization, as suggested by Chen and Tell (FOCS 2021): derandomization that incurs essentially no time overhead and errs only on inputs that are infeasible to find. Constructing such algorithms is challenging, and so far there have been no results following the one in their initial work. In that result, the algorithm is essentially the classical Nisan-Wigderson generator, and it relies on an ad-hoc assumption asserting the existence of a function that is non-batch-computable over all polynomial-time samplable distributions. In this work we deduce free lunch derandomization from a variety of natural hardness assumptions. In particular, we do not resort to non-batch-computability, and the common denominator of all our assumptions is hardness over all polynomial-time samplable distributions, which is necessary for the conclusion. The main technical components in our proofs are constructions of new and superfast targeted generators, which completely eliminate the time overheads inherent to all previously known constructions. In particular, we present an alternative construction of the targeted generator of Chen and Tell (FOCS 2021), which is faster than the original and also more natural and technically intuitive. These contributions significantly strengthen the evidence for the possibility of free lunch derandomization, distill the assumptions required for such a result, and provide the first set of dedicated technical tools for studying the question.
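The abstract notes that the earlier free-lunch algorithm is essentially the classical Nisan-Wigderson generator. For intuition, here is a toy instantiation of that generator with the standard polynomial-based design; the hard function is replaced by a stand-in `parity`, so this is purely illustrative (the real construction needs a genuinely hard function and much larger parameters).

```python
import itertools

def nw_design(q: int, d: int):
    """Polynomial-based NW design: seed positions form a q x q grid
    (q prime); each degree-<d polynomial over GF(q) gives a set of q
    positions {(a, p(a))}, and two distinct sets intersect in < d
    positions, since distinct polynomials agree on < d points."""
    sets = []
    for coeffs in itertools.product(range(q), repeat=d):
        s = [a * q + (sum(c * pow(a, i, q) for i, c in enumerate(coeffs)) % q)
             for a in range(q)]
        sets.append(s)
    return sets  # q**d sets of size q inside a seed of length q*q

def nw_generator(f, seed: str, sets) -> str:
    """G(seed)_i = f(seed restricted to S_i): one evaluation of the
    (supposedly hard) function f per output bit."""
    return "".join(f("".join(seed[j] for j in s)) for s in sets)
```

With q = 3 and d = 2 this stretches a 9-bit seed into 9 output bits; real instantiations take q polylogarithmic in the output length, giving exponential stretch.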
  3. This paper revisits the study of two classical technical tools in theoretical computer science: Yao's transformation of distinguishers to next-bit predictors (FOCS 1982), and the "reconstruction paradigm" in pseudorandomness (e.g., as in Nisan and Wigderson, JCSS 1994). Recent works of Pyne, Raz, and Zhan (FOCS 2023) and Doron, Pyne, and Tell (STOC 2024) showed that both of these tools can be derandomized in the specific context of read-once branching programs (ROBPs), but left open the question of derandomizing them in more general settings. Our main contributions give appealing evidence that derandomization of the two tools is possible in general settings, show surprisingly strong consequences of such derandomization, and reveal several new settings where such derandomization is unconditionally possible for algorithms stronger than ROBPs (with useful consequences). Specifically:
     • We show that derandomizing these tools is equivalent to general derandomization. Specifically, derandomizing distinguish-to-predict transformations is equivalent to prBPP=prP, and derandomizing reconstruction procedures (in a more general sense that we introduce) is equivalent to prBPP=prZPP. These statements hold even when scaled down to weak circuit classes and to algorithms that run in super-polynomial time.
     • Our main technical contributions are unconditional constructions of derandomized versions of Yao's transformation (or reductions of this task to other problems) for classes and algorithms beyond ROBPs. Consequently, we deduce new results: a significant relaxation of the hypotheses required to derandomize the isolation lemma for logspace algorithms and deduce that NL=UL; and proofs that derandomization necessitates targeted PRGs in catalytic logspace (unconditionally) and in logspace (conditionally).
     In addition, we introduce a natural subclass of prZPP that has been implicitly studied in recent works (Korten FOCS 2021, CCC 2022): the class of problems reducible to a problem called "Lossy Code". We provide a structural characterization of this class in terms of derandomized reconstruction procedures, and show that this characterization is robust to several natural variations. Lastly, we present alternative proofs of classical results in the theory of pseudorandomness (such as two-sided derandomization reducing to one-sided), relying on the notion of deterministically transforming distinguishers into predictors as the main technical tool.
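For intuition, here is the classical randomized form of Yao's distinguisher-to-next-bit-predictor transformation: to predict bit i from a prefix, guess a random bit, pad with a uniform suffix, and keep the guess iff the distinguisher accepts. The distribution and distinguisher in the demo are hypothetical toys; the work above concerns when the random padding can be eliminated.

```python
import random

def predictor_from_distinguisher(D, i: int, m: int):
    """Yao's transformation (randomized form): build a next-bit
    predictor for bit i of m-bit samples from a distinguisher D."""
    def predict(prefix):  # prefix = bits x_0 .. x_{i-1}
        b = random.randint(0, 1)
        suffix = [random.randint(0, 1) for _ in range(m - i - 1)]
        # Keep the guess b iff D accepts the padded string.
        return b if D(prefix + [b] + suffix) else 1 - b
    return predict

# Toy distribution: the last bit is the parity of the first m-1 bits.
m = 8
def sample():
    x = [random.randint(0, 1) for _ in range(m - 1)]
    return x + [sum(x) % 2]

# A distinguisher that accepts all samples but only ~half of uniform strings.
D = lambda x: x[-1] == sum(x[:-1]) % 2
predict = predictor_from_distinguisher(D, m - 1, m)
```

In this toy case the predictor recovers the last bit perfectly; in general the hybrid argument only guarantees advantage (roughly the distinguishing gap divided by m) at some position i.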
  4. We provide compelling evidence for the potential of hardness-vs.-randomness approaches to make progress on the long-standing problem of derandomizing space-bounded computation. Our first contribution is a derandomization of bounded-space machines from hardness assumptions for classes of uniform deterministic algorithms, for which strong (but non-matching) lower bounds can be unconditionally proved. We prove one such result showing that BPL=L "on average", and a similar result showing that BPSPACE[O(n)]=DSPACE[O(n)]. Next, we significantly improve the main results of prior works on hardness-vs.-randomness for logspace. As one of our results, we relax the assumptions needed for derandomization with a minimal memory footprint (i.e., showing BPSPACE[S] ⊆ DSPACE[c·S] for a small constant c), by completely eliminating a cryptographic assumption that was needed in prior work. A key contribution underlying all of our results is non-black-box use of the descriptions of space-bounded Turing machines when proving hardness-to-randomness results. That is, the crucial point allowing us to prove our results is that we use properties that are specific to space-bounded machines.
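As background for the space-bounded setting: the baseline deterministic simulation of a randomized logspace machine tracks powers of its configuration-graph transition matrix by iterated squaring, the same engine as Savitch's theorem. A numerical sketch of the squaring step follows; the genuinely space-efficient versions recompute rather than store intermediate squares, which is what yields Savitch-style O(log n · log t) space.

```python
import numpy as np

def walk_by_squaring(P: np.ndarray, t: int) -> np.ndarray:
    """Compute the t-step walk matrix P^t with O(log t) matrix
    multiplications (binary exponentiation)."""
    result = np.eye(P.shape[0])
    power = P.copy()
    while t:
        if t & 1:
            result = result @ power
        power = power @ power
        t >>= 1
    return result
```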
  5. The Exponential-Time Hypothesis (ETH) is a strengthening of the P ≠ NP conjecture, stating that 3-SAT on n variables cannot be solved in (uniform) time 2^{ε·n}, for some ε > 0. In recent years, analogous hypotheses that are "exponentially-strong" forms of other classical complexity conjectures (such as NP ⊄ BPP or coNP ⊄ NP) have also been introduced, and have become widely influential. In this work, we focus on the interaction of exponential-time hypotheses with the fundamental and closely-related questions of derandomization and circuit lower bounds. We show that even relatively mild variants of exponential-time hypotheses have far-reaching implications for derandomization, circuit lower bounds, and the connections between the two. Specifically, we prove that: (1) The Randomized Exponential-Time Hypothesis (rETH) implies that BPP can be simulated "on average" in deterministic (nearly-)polynomial time (i.e., in time 2^{Õ(log n)} = n^{(log log n)^{O(1)}}). The derandomization relies on a conditional construction of a pseudorandom generator with near-exponential stretch (i.e., with seed length Õ(log n)); this significantly improves the state-of-the-art in uniform "hardness-to-randomness" results, which previously only yielded pseudorandom generators with sub-exponential stretch from such hypotheses. (2) The Non-Deterministic Exponential-Time Hypothesis (NETH) implies that derandomization of BPP is completely equivalent to circuit lower bounds against E, and in particular that pseudorandom generators are necessary for derandomization.
In fact, we show that the foregoing equivalence follows from a very weak version of NETH, and we also show that this very weak version is necessary for proving a slightly stronger conclusion that we deduce from it. Lastly, we show that disproving certain exponential-time hypotheses requires proving breakthrough circuit lower bounds. In particular, if CircuitSAT for circuits over n bits of size poly(n) can be solved by probabilistic algorithms in time 2^{n/polylog(n)}, then BPE does not have circuits of quasilinear size.
  6. We present new constructions of pseudorandom generators (PRGs) for two of the most widely studied non-uniform circuit classes in complexity theory. Our main result is a construction of the first non-trivial PRG for linear threshold (LTF) circuits of arbitrary constant depth and super-linear size. This PRG fools circuits with depth d ∈ N and n^{1+δ} wires, where δ = 2^{−O(d)}, using seed length O(n^{1−δ}) and with error 2^{−n^δ}. This tightly matches the best known lower bounds for this circuit class. As a consequence of our result, all the known hardness for LTF circuits has now effectively been translated into pseudorandomness. This brings the extensive effort in the last decade to construct PRGs and deterministic circuit-analysis algorithms for this class to the point where any subsequent improvement would yield breakthrough lower bounds. Our second contribution is a PRG for De Morgan formulas of size s whose seed length is s^{1/3+o(1)}·polylog(1/ε) for error ε. In particular, our PRG can fool formulas of sub-cubic size s = n^{3−Ω(1)} with an exponentially small error ε = exp(−n^{Ω(1)}). This significantly improves on the inverse-polynomial error of the previous state-of-the-art for such formulas by Impagliazzo, Meka, and Zuckerman (FOCS 2012, JACM 2019), and again tightly matches the best currently-known lower bounds for this class. In both settings, a key ingredient in our constructions is a pseudorandom restriction procedure that has a tiny failure probability, but simplifies the function to a non-natural "hybrid computational model" that combines several computational models.
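For readers unfamiliar with the model: an LTF gate computes the sign of a weighted sum, and a restriction (fixing some inputs) folds the fixed variables into the threshold, so the restricted gate is again an LTF on the live variables. A minimal sketch of this closure property follows; the paper's actual restriction procedure is pseudorandom and its analysis is far more delicate.

```python
def ltf(weights, theta):
    """A linear threshold function: f(x) = 1 iff <w, x> >= theta."""
    return lambda x: int(sum(w * xi for w, xi in zip(weights, x)) >= theta)

def restrict(weights, theta, rho):
    """Apply a restriction rho (dict: index -> 0/1; unset variables stay
    live). Fixed variables contribute a constant to <w, x>, which is
    folded into the threshold, so the result is again an LTF."""
    new_theta = theta - sum(weights[i] * b for i, b in rho.items())
    new_weights = [w for i, w in enumerate(weights) if i not in rho]
    return new_weights, new_theta
```

For example, restricting the majority of three bits by fixing x0 = 1 yields the OR of the two remaining bits (weights [1, 1], threshold 1).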
  7. Byrka, Jarosław; Meka, Raghu (Ed.)